Exponential families are widely used in machine learning, including many distributions in continuous and discrete domains (e.g., Gaussian, Dirichlet, Poisson, and categorical distributions via the softmax transformation). Distributions in each of these families have fixed support. In contrast, for finite domains, recent work on sparse alternatives to softmax (e.g., sparsemax, $\alpha$-entmax, and fusedmax) has led to distributions with varying support. This paper develops sparse alternatives for continuous distributions, based on several technical contributions: First, we define $\Omega$-regularized prediction maps and Fenchel-Young losses for arbitrary domains (possibly infinite or continuous). For linearly parametrized families, we show that minimization of Fenchel-Young losses is equivalent to moment matching of the statistics, generalizing a fundamental property of exponential families. When $\Omega$ is a Tsallis negentropy with parameter $\alpha$, we obtain "deformed exponential families," which include $\alpha$-entmax and sparsemax ($\alpha = 2$) as particular cases. For quadratic energy functions, the resulting densities are $\beta$-Gaussians, an instance of elliptical distributions that contain as particular cases the Gaussian, biweight, triweight, and Epanechnikov densities, and for which we derive closed-form expressions for the variance, Tsallis entropy, and Fenchel-Young loss. When $\Omega$ is a total variation or Sobolev regularizer, we obtain continuous versions of fusedmax. Finally, we introduce continuous-domain attention mechanisms, deriving efficient gradient backpropagation algorithms for $\alpha \in \{1, 4/3, 3/2, 2\}$. Using these algorithms, we demonstrate our sparse continuous distributions for attention-based audio classification and visual question answering, showing that they allow attending to time intervals and compact regions.
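As a point of reference for the finite-domain sparse alternatives to softmax mentioned above, the following is a minimal sketch (illustrative, not the paper's code) of the sparsemax transformation, i.e., the Euclidean projection of a score vector onto the probability simplex, which can assign exactly zero probability to some outcomes:

```python
# Minimal sketch of finite-domain sparsemax (Martins & Astudillo, 2016);
# illustrative only, not the paper's implementation.
import numpy as np

def sparsemax(z):
    """Euclidean projection of the score vector z onto the probability simplex."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]                  # scores in descending order
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = z_sorted - (cumsum - 1) / k > 0    # coordinates kept in the support
    k_max = k[support][-1]                       # size of the support
    tau = (cumsum[k_max - 1] - 1) / k_max        # threshold
    return np.maximum(z - tau, 0.0)

print(sparsemax([2.0, 1.0, -1.0]))  # [1. 0. 0.] -- sparse, unlike softmax
```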
Reinforcement learning (RL) has shown great promise with algorithms learning in environments with large state and action spaces purely from scalar reward signals. A crucial challenge for current deep RL algorithms is that they require a tremendous number of environment interactions for learning. This can be infeasible in situations where such interactions are expensive, such as in robotics. Offline RL algorithms try to address this issue by bootstrapping the learning process from existing logged data without needing to interact with the environment from the very beginning. While online RL algorithms are typically evaluated as a function of the number of environment interactions, there exists no single established protocol for evaluating offline RL methods. In this paper, we propose a sequential approach to evaluate offline RL algorithms as a function of the training set size and thus by their data efficiency. Sequential evaluation provides valuable insights into the data efficiency of the learning process and the robustness of algorithms to distribution changes in the dataset, while also harmonizing the visualization of the offline and online learning phases. Our approach is generally applicable and easy to implement. We compare several existing offline RL algorithms using this approach and present insights from a variety of tasks and offline datasets.
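A minimal sketch of the sequential-evaluation idea described above; the `make_agent` and `evaluate` callables are hypothetical stand-ins for a concrete offline RL implementation and evaluation protocol:

```python
# Sketch of sequential evaluation: train on growing prefixes of the logged
# dataset and report performance as a function of training set size.
# make_agent() and evaluate() are hypothetical user-supplied callables.
def sequential_evaluation(dataset, make_agent, evaluate, fractions=(0.1, 0.25, 0.5, 1.0)):
    results = []
    for frac in fractions:
        n = int(len(dataset) * frac)
        agent = make_agent()
        agent.train(dataset[:n])              # offline training on the prefix only
        results.append((n, evaluate(agent)))  # e.g. average return over evaluation rollouts
    return results
```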
Despite the impact of psychiatric disorders on clinical health, early-stage diagnosis remains a challenge. Machine learning studies have shown that classifiers tend to be overly narrow in the diagnosis prediction task. The overlap between conditions leads to high heterogeneity among participants that is not adequately captured by classification models. To address this issue, normative approaches have emerged as an alternative. By using a generative model to learn the distribution of healthy brain data patterns, we can identify the presence of pathologies as deviations or outliers from the distribution learned by the model. In particular, deep generative models have shown strong results as normative models for identifying neurological lesions in the brain. However, unlike most neurological lesions, psychiatric disorders present subtle changes widespread across several brain regions, making these alterations challenging to identify. In this work, we evaluate the performance of transformer-based normative models in detecting subtle brain changes expressed in adolescents and young adults. We trained our model on 3D MRI scans of neurotypical individuals (N=1,765). Then, we obtained the likelihood of neurotypical controls and psychiatric patients with early-stage schizophrenia from an independent dataset (N=93) from the Human Connectome Project. Using the predicted likelihood of the scans as a proxy for a normative score, we obtained an AUROC of 0.82 when assessing the difference between controls and individuals with early-stage schizophrenia. Our approach surpassed recent normative methods based on brain age and Gaussian Processes, showing the promise of deep generative models for individualised analyses.
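To illustrate the normative-scoring step, here is a hedged sketch with synthetic numbers (not the study's data or model): the likelihood under a generative model trained on neurotypical scans is used as an abnormality score, and case/control separation is summarized with AUROC.

```python
# Toy illustration: use (negative) log-likelihood under a normative model as an
# abnormality score and measure case/control separation with AUROC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
loglik_controls = rng.normal(loc=-100.0, scale=5.0, size=60)   # synthetic stand-in values
loglik_patients = rng.normal(loc=-108.0, scale=5.0, size=33)   # deviations tend to score lower

labels = np.concatenate([np.zeros(60), np.ones(33)])           # 1 = patient
scores = -np.concatenate([loglik_controls, loglik_patients])   # low likelihood -> high abnormality
print("AUROC:", roc_auc_score(labels, scores))
```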
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
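A hedged usage sketch for loading a released BLOOM checkpoint with the Hugging Face transformers library; the small bloom-560m variant is used here purely for illustration, since the full 176B model follows the same API but requires far more memory.

```python
# Illustrative only: greedy generation with a small open BLOOM checkpoint.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("The BLOOM language model was trained on", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```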
Magnetic resonance (MR) and computed tomography (CT) images are two typical types of medical images that provide mutually complementary information for accurate clinical diagnosis and treatment. However, obtaining both images may be limited due to considerations such as cost, radiation dose, and missing modalities. Recently, medical image synthesis has attracted growing research interest as a way to cope with this limitation. In this paper, we propose a bidirectional learning model, denoted dual contrast cycleGAN (DC-cycleGAN), to synthesize medical images from unpaired data. Specifically, a dual contrast loss is introduced into the discriminators to indirectly build constraints between real source and synthetic images by using samples from the source domain as negative samples, enforcing the synthetic images to fall far away from the source domain. In addition, cross-entropy and the structural similarity index (SSIM) are integrated into DC-cycleGAN in order to consider both the luminance and structure of samples when synthesizing images. The experimental results indicate that DC-cycleGAN produces promising results compared with other cycleGAN-based medical image synthesis methods such as cycleGAN, RegGAN, DualGAN, and NiceGAN. The code will be available at https://github.com/JiayuanWang-JW/DC-cycleGAN.
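A minimal, hedged sketch of the dual-contrast idea described above (not the paper's exact formulation): the target-domain discriminator treats real source-domain images as additional negatives, pushing synthetic outputs away from the source domain.

```python
# Sketch of a dual-contrast discriminator objective; the exact DC-cycleGAN loss
# may differ. disc maps an image batch to per-sample logits.
import torch
import torch.nn.functional as F

def dual_contrast_discriminator_loss(disc, real_target, fake_target, real_source):
    real_logits = disc(real_target)
    fake_logits = disc(fake_target.detach())   # stop gradients into the generator
    source_logits = disc(real_source)          # real source images used as extra negatives
    loss_real = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    loss_source = F.binary_cross_entropy_with_logits(source_logits, torch.zeros_like(source_logits))
    return loss_real + loss_fake + loss_source
```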
This paper presents a methodology for combining programming and mathematics to optimize elevator wait times. Based on simulated user data generated according to the canonical three-peak model of elevator traffic, we first develop a naive model from an intuitive understanding of the logic behind elevators. We take into consideration a general array of features including capacity, acceleration, and maximum wait time thresholds to adequately model realistic circumstances. Using the same evaluation framework, we proceed to develop a Deep Q Learning model in an attempt to match the hard-coded naive approach for elevator control. Throughout the majority of the paper, we work under a Markov Decision Process (MDP) schema, but later explore how the assumption fails to characterize the highly stochastic overall Elevator Group Control System (EGCS).
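To make the MDP framing concrete, here is a toy tabular Q-learning update over discretized elevator states and actions; it is a simplified stand-in for the paper's Deep Q Learning model, with the state and action encodings left hypothetical.

```python
# Toy tabular Q-learning backup; states and actions are assumed to be
# discretized elevator configurations (e.g. floor, direction, pending calls).
import numpy as np

n_states, n_actions = 50, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99

def q_update(s, a, reward, s_next):
    """One Bellman backup; the reward could be the negative wait time incurred this step."""
    td_target = reward + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])
```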
To achieve good performance and generalizability, medical image segmentation models should be trained on large datasets with sufficient variability. Due to ethical and governance restrictions and the costs associated with labeling data, scientific development is often stifled, with models trained and tested on limited data. Data augmentation is often used to artificially increase the variability of the data distribution and improve model generalizability. Recent works have explored deep generative models for image synthesis, as such an approach would enable effectively unlimited generation of diverse data, addressing the generalizability and data access problems. However, many proposed solutions limit the user's control over what is generated. In this work, we propose brainSPADE, a model which combines a synthetic, diffusion-based label generator with a semantic image generator. Our model can produce fully synthetic brain labels on demand, with or without a pathology of interest, and then generate a corresponding MRI image in an arbitrary guided style. Experiments show that brainSPADE synthetic data can be used to train segmentation models with performance comparable to that of models trained on real data.
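A hedged interface sketch (names hypothetical) of the two-stage pipeline described above: a diffusion-based label generator first samples a synthetic segmentation map, which a semantic image generator then renders as an MRI in an arbitrary guided style.

```python
# Hypothetical interface for the two-stage label-then-image generation pipeline;
# the actual brainSPADE components and arguments may differ.
def generate_synthetic_pair(label_generator, image_generator, pathology=None, style_image=None):
    label_map = label_generator.sample(condition=pathology)      # stage 1: synthetic brain labels
    mri = image_generator.render(label_map, style=style_image)   # stage 2: style-guided MRI image
    return label_map, mri
```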
Deep neural networks have brought remarkable breakthroughs in medical image analysis. However, due to their data-hungry nature, the modest dataset sizes typical of medical imaging projects may hinder their full potential. Generating synthetic data provides a promising alternative, making it possible to complement training datasets and conduct medical image research at a larger scale. Recently, diffusion models have attracted the attention of the computer vision community by producing photorealistic synthetic images. In this study, we explore using latent diffusion models to generate synthetic images from high-resolution 3D brain images. We used T1w MRI images from the UK Biobank dataset (N=31,740) to train our models to learn the probabilistic distribution of brain images, conditioned on covariates such as age, sex, and brain structure volumes. We found that our models created realistic data, and that the conditioning variables can be used to control the data generation effectively. In addition, we created a synthetic dataset with 100,000 brain images and made it openly available to the scientific community.
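A hedged sketch of the covariate conditioning described above; the normalization scheme and the sampler call are assumptions for illustration, not the study's implementation.

```python
# Build a conditioning vector from subject covariates (age, sex, structure
# volumes) for a hypothetical conditional sampler of a trained latent diffusion model.
import numpy as np

def make_conditioning(age, sex, ventricular_volume, brain_volume):
    return np.array([
        age / 100.0,                       # rough normalization of age in years
        1.0 if sex == "female" else 0.0,   # binary sex encoding
        ventricular_volume,                # volumes assumed already normalized to [0, 1]
        brain_volume,
    ], dtype=np.float32)

# cond = make_conditioning(63, "female", 0.2, 0.8)
# scan = trained_ldm.sample(conditioning=cond)   # hypothetical sampler call
```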
We propose a data collection and annotation pipeline that extracts information from Vietnamese radiology reports to provide accurate labels for chest X-ray (CXR) images. This is achieved by annotating data that match locally specific diagnostic categories, which may vary from country to country. To assess the efficacy of the proposed labeling technique, we built a CXR dataset containing 9,752 studies and evaluated our pipeline on a subset of this dataset. With an F1 score of at least 0.9923, the evaluation demonstrates that our labeling tool is precise and consistent across all classes. After building the dataset, we train deep learning models that leverage knowledge transferred from large public CXR datasets. We employ a variety of loss functions to overcome the curse of imbalanced multi-label datasets and experiment with various model architectures to select the one that delivers the best performance. Our best model (CheXpert-pretrained EfficientNet-B2) yields an F1 score of 0.6989 (95% CI 0.6740, 0.7240), an AUC of 0.7912, a sensitivity of 0.7064, and a specificity of 0.8760 for abnormal diagnosis in general. Finally, we demonstrate that our coarse classification (based on five specific abnormality locations) achieves comparable results to fine classification (twelve pathologies) on the benchmark CheXpert dataset for general abnormality detection, while delivering better average performance across all classes.
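One common way to handle the imbalanced multi-label problem mentioned above is a focal variant of binary cross-entropy; this is a hedged sketch of that choice, not necessarily the exact loss used in the paper.

```python
# Focal binary cross-entropy for multi-label CXR findings; down-weights easy
# examples so rare positive labels contribute more to the gradient.
import torch
import torch.nn.functional as F

def multilabel_focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """logits, targets: (batch, n_findings) float tensors with targets in {0, 1}."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                                   # probability assigned to the true label
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
```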
IceCube, a cubic-kilometer array of optical sensors built to detect atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, is deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in IceCube data analysis. Reconstructing and classifying events is a challenge due to the detector geometry, inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively small number of signal photons produced per event. To address this challenge, IceCube events can be represented as point-cloud graphs, with a graph neural network (GNN) serving as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range with the state-of-the-art maximum likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed false positive rate (FPR) compared to current IceCube methods. Alternatively, at a fixed signal efficiency, the GNN reduces the FPR by more than a factor of 8 (to below half a percent). For the reconstruction of energy, direction, and interaction vertex, the resolution improves by 13%-20% on average compared to current maximum likelihood techniques. When run on a GPU, the GNN can process IceCube events at a rate close to the 2.7 kHz median IceCube trigger rate, which opens up the possibility of using low-energy neutrinos in online searches for transient events.
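A simplified sketch (not the analysis code) of the point-cloud graph representation described above: each sensor hit becomes a node with position, time, and charge features, and edges connect each hit to its k nearest neighbours in space.

```python
# Build a k-nearest-neighbour graph from per-hit features; a GNN would then
# operate on (features, edge_index). The feature choices here are illustrative.
import numpy as np
from scipy.spatial import cKDTree

def hits_to_graph(positions, times, charges, k=8):
    """positions: (n, 3) sensor coordinates; times, charges: (n,) arrays."""
    features = np.column_stack([positions, times, charges])    # (n, 5) node features
    tree = cKDTree(positions)
    k_eff = min(k + 1, len(positions))
    _, neighbours = tree.query(positions, k=k_eff)              # first neighbour is the hit itself
    edges = [(i, int(j)) for i, row in enumerate(neighbours) for j in np.atleast_1d(row)[1:]]
    return features, np.array(edges).T                          # edge_index of shape (2, n_edges)
```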